Series: Attention到底在關注什麼? (What is attention actually attending to?), Day 29 of 30

Day 29 Implementing a translation program with a transformer (11): Decoder layer

Each decoder layer consists of the following sublayers:

  1. Masked multi-head attention (with both a look-ahead mask and a padding mask; a sketch of how the two masks are combined follows this list)
  2. Multi-head attention (with a padding mask)
    V (value) and K (key) receive the encoder output as inputs, while Q (query) receives the output of the masked multi-head attention sublayer
  3. Point-wise feed-forward networks
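
As a quick illustration of the masking in sublayer 1, here is a minimal sketch of how the look-ahead mask and the padding mask can be combined into a single mask. The helper functions follow the TensorFlow transformer tutorial that this series is based on (they were likely introduced in an earlier post) and are restated here only so the snippet is self-contained:

import tensorflow as tf

def create_padding_mask(seq):
  # 1 where the token id is 0 (padding), shaped to broadcast over attention logits
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)

def create_look_ahead_mask(size):
  # upper-triangular matrix: 1 marks the "future" positions to be masked
  return 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)

tar = tf.constant([[7, 6, 0, 0], [1, 2, 3, 0]])  # toy target ids, 0 = padding
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
combined_mask.shape  # (2, 1, 4, 4): this combined mask is what the first attention block receives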

Each of these sublayers has a residual connection around it, followed by layer normalization.

The output of each sublayer is LayerNorm(x + Sublayer(x)).

The normalization is done on the d_model (last) axis.
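
As a small, self-contained sketch of this pattern (the Dense layer here is just a stand-in for any of the three real sublayers):

import tensorflow as tf

d_model = 512
layernorm = tf.keras.layers.LayerNormalization(epsilon=1e-6)  # normalizes over the last (d_model) axis by default
sublayer = tf.keras.layers.Dense(d_model)  # stand-in for any of the three sublayers

x = tf.random.uniform((64, 50, d_model))  # (batch_size, target_seq_len, d_model)
out = layernorm(x + sublayer(x))          # LayerNorm(x + Sublayer(x))
out.shape  # TensorShape([64, 50, 512])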

There are N decoder layers in the transformer.

Because Q receives the output of the decoder's first attention block and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next token by looking at the encoder output and self-attending to its own output (the shape check after the sample run below makes this concrete).

import tensorflow as tf

# MultiHeadAttention and point_wise_feed_forward_network are the helpers
# defined in the earlier posts of this series.
class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    # Sublayer 1: masked self-attention; sublayer 2: encoder-decoder attention
    self.mha1 = MultiHeadAttention(d_model, num_heads)
    self.mha2 = MultiHeadAttention(d_model, num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)

  def call(self, x, enc_output, training,
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    # Block 1: masked self-attention over the decoder input
    # (this custom MultiHeadAttention takes its arguments as (v, k, q, mask))
    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)

    # Block 2: encoder-decoder attention; V and K come from the encoder
    # output, while Q comes from the masked self-attention output (out1)
    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)

    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)

    return out3, attn_weights_block1, attn_weights_block2

# sample_encoder_layer_output is the output of the sample Encoder layer built
# in the previous post (Day 28); the masks are left as None for this quick test.
sample_decoder_layer = DecoderLayer(512, 8, 2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])
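
To make the point above about the attention weights concrete, the same sample layer can be called again while keeping the weights instead of discarding them (a small check added here, not part of the original snippet):

_, block1_weights, block2_weights = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
    False, None, None)

# Block 1: self-attention, each target position attends over the target sequence
block1_weights.shape  # (batch_size, num_heads, target_seq_len, target_seq_len)
# Block 2: each target position attends over the encoder's input positions,
# i.e. how much weight each encoder output gets when predicting the next token
block2_weights.shape  # (batch_size, num_heads, target_seq_len, input_seq_len)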
